MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems
Speaker Verification (SV) is widely deployed in mobile systems to
authenticate legitimate users by their voice traits. In this work, we
propose MASTERKEY, a backdoor attack that compromises SV models. Unlike
previous attacks, we focus on a practical real-world setting in which the
attacker has no knowledge of the intended victim. To design MASTERKEY, we
investigate the limitation of existing poisoning attacks against unseen
targets. Then, we optimize a universal backdoor that is capable of attacking
arbitrary targets. Next, we embed the speaker's characteristics and semantic
information into the backdoor, making it imperceptible. Finally, we estimate
the channel distortion and integrate it into the backdoor. We validate our
attack on 6 popular SV models. Specifically, we poison a total of 53 models and
use our trigger to attack 16,430 enrolled speakers, composed of 310 target
speakers enrolled in 53 poisoned models. Our attack achieves 100% attack
success rate with a 15% poison rate. By decreasing the poison rate to 3%, the
attack success rate remains around 50%. We validate our attack in 3 real-world
scenarios and successfully demonstrate the attack through both over-the-air and
over-the-telephony-line scenarios.
Comment: Accepted by Mobicom 202
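The poisoning step described above can be sketched generically: mix a trigger waveform into a small fraction of training utterances and relabel them as the target speaker. This is a minimal illustrative sketch only; MASTERKEY's universal-trigger optimization, imperceptibility embedding, and channel-distortion modeling are not reproduced here, and `poison_dataset` is a hypothetical helper.

```python
import numpy as np

def poison_dataset(waveforms, labels, trigger, target_label,
                   poison_rate=0.15, rng=None):
    """Backdoor-poisoning sketch: overlay a trigger onto a fraction of
    samples and relabel them as the attacker's target speaker."""
    rng = rng or np.random.default_rng(0)
    waveforms = [w.copy() for w in waveforms]   # leave originals untouched
    labels = list(labels)
    n_poison = int(poison_rate * len(waveforms))
    idx = rng.choice(len(waveforms), size=n_poison, replace=False)
    for i in idx:
        n = min(len(waveforms[i]), len(trigger))
        waveforms[i][:n] += 0.1 * trigger[:n]   # low-amplitude overlay keeps it subtle
        labels[i] = target_label                 # flip label to the target identity
    return waveforms, labels
```

With a 15% poison rate on 20 clean samples, exactly 3 are triggered and relabeled, mirroring the rate reported in the abstract.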
PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection
In this paper, we propose PhantomSound, a query-efficient black-box attack
toward voice assistants. Existing black-box adversarial attacks on voice
assistants either apply substitution models or leverage the intermediate model
output to estimate the gradients for crafting adversarial audio samples.
However, these attack approaches require a significant number of queries and a
lengthy training stage. PhantomSound leverages a decision-based attack to
produce effective adversarial audio and reduces the number of queries by
optimizing the gradient estimation. In the experiments, we perform our attack
against 4 different speech-to-text APIs under 3 real-world scenarios to
demonstrate the real-time attack impact. The results show that PhantomSound is
practical and robust in attacking 5 popular commercial voice controllable
devices over the air, and is able to bypass 3 liveness detection mechanisms
with >95% success rate. The benchmark result shows that PhantomSound can
generate adversarial examples and launch the attack in a few minutes. We
significantly enhance the query efficiency and reduce the cost of a successful
untargeted and targeted adversarial attack by 93.1% and 65.5% compared with the
state-of-the-art black-box attacks, using merely ~300 queries (~5 minutes) and
~1,500 queries (~25 minutes), respectively.
Comment: RAID 202
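The core idea of a decision-based attack, which PhantomSound builds on, can be sketched in a few lines: starting from a known adversarial input, repeatedly step toward the benign input while querying only the model's hard decision, sliding along the decision boundary when a step fails. This is a hedged sketch under assumed interfaces; `query` is a hypothetical black-box oracle returning whether the input is still misclassified, and PhantomSound's phoneme-injection and gradient-estimation optimizations are not shown.

```python
import numpy as np

def decision_attack(query, x, x_adv, steps=200, rng=None):
    """Decision-based untargeted attack sketch: shrink the perturbation
    while keeping the sample adversarial, using only hard-label queries."""
    rng = rng or np.random.default_rng(0)
    for _ in range(steps):
        # step toward the benign input x
        candidate = x_adv + 0.1 * (x - x_adv)
        if query(candidate):
            x_adv = candidate   # accept: still adversarial, closer to x
        else:
            # small random step to slide along the decision boundary
            noise = rng.normal(size=x.shape)
            noise *= 0.05 * np.linalg.norm(x - x_adv) / np.linalg.norm(noise)
            candidate = x_adv + noise
            if query(candidate):
                x_adv = candidate
    return x_adv
```

Each iteration costs at most two oracle queries, which is why reducing the number of iterations via better gradient estimation (as the paper does) directly cuts attack cost.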
Aligning Linguistic Words and Visual Semantic Units for Image Captioning
Image captioning attempts to generate a sentence composed of several
linguistic words, which are used to describe objects, attributes, and
interactions in an image, denoted as visual semantic units in this paper. Based
on this view, we propose to explicitly model the object interactions in
semantics and geometry based on Graph Convolutional Networks (GCNs), and fully
exploit the alignment between linguistic words and visual semantic units for
image captioning. Particularly, we construct a semantic graph and a geometry
graph, where each node corresponds to a visual semantic unit, i.e., an object,
an attribute, or a semantic (geometrical) interaction between two objects.
Accordingly, the semantic (geometrical) context-aware embeddings for each unit
are obtained through the corresponding GCN learning process. At each time
step, a context-gated attention module takes as input the embeddings of the
visual semantic units and hierarchically aligns the current word with these
units by first deciding which type of visual semantic unit (object, attribute,
or interaction) the current word is about, and then finding the most correlated
visual semantic units under this type. Extensive experiments are conducted on
the challenging MS-COCO image captioning dataset, and superior results are
reported compared with state-of-the-art approaches.
Comment: 8 pages, 5 figures. Accepted by ACM MM 201
Optimal Spatial-Temporal Triangulation for Bearing-Only Cooperative Motion Estimation
Vision-based cooperative motion estimation is an important problem for many
multi-robot systems such as cooperative aerial target pursuit. This problem can
be formulated as bearing-only cooperative motion estimation, where the visual
measurement is modeled as a bearing vector pointing from the camera to the
target. Conventional approaches to bearing-only cooperative estimation are
mainly based on the framework of distributed Kalman filtering (DKF). In this
paper, we propose a new optimal bearing-only cooperative estimation algorithm,
named spatial-temporal triangulation, based on the method of distributed
recursive least squares, which provides a more flexible framework for designing
distributed estimators than DKF. The design of the algorithm fully incorporates
all the available information and the specific triangulation geometric
constraint. As a result, the algorithm achieves better estimation performance than
state-of-the-art DKF algorithms in terms of both accuracy and convergence
speed, as verified by numerical simulation. We rigorously prove the exponential
convergence of the proposed algorithm. Moreover, to verify the effectiveness of
the proposed algorithm under challenging practical conditions, we develop a
vision-based cooperative aerial target pursuit system, which, to the best of
our knowledge, is the first fully autonomous system of its kind.
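The triangulation geometric constraint mentioned above can be illustrated with its batch least-squares form: each bearing g_i from camera position s_i constrains the target p via (I - g_i g_i^T)(p - s_i) = 0. A hedged NumPy sketch of the batch solution (the paper's spatial-temporal triangulation is a distributed recursive variant of this idea; `triangulate` is an illustrative helper, not the authors' code):

```python
import numpy as np

def triangulate(positions, bearings):
    """Batch least-squares triangulation from bearing-only measurements.

    Solves min_p sum_i ||(I - g_i g_i^T)(p - s_i)||^2, where s_i is the
    i-th camera position and g_i the unit bearing toward the target.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, g in zip(positions, bearings):
        g = g / np.linalg.norm(g)
        P = np.eye(3) - np.outer(g, g)  # project onto g's orthogonal complement
        A += P
        b += P @ s
    return np.linalg.solve(A, b)        # requires non-degenerate geometry
```

With noise-free bearings from non-collinear viewpoints this recovers the target exactly; a recursive least-squares update of the same normal equations yields the distributed online form the abstract describes.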